Work the net: A management guide for formal networks
This book draws on the results of the NeRO project, which was launched at the initiative of GTZ to study how information and knowledge are shared among regional organizations. It includes studies on planning, setting up, and managing networks; on the participation of decision-makers in networks; and on communication, leadership, and culture in networks.
Transcriptional signature of an adult brain tumor in Drosophila.
BACKGROUND: Mutations and gene expression alterations in brain tumors have been extensively investigated; however, the causes of brain tumorigenesis are largely unknown. Animal models are necessary to correlate altered transcriptional activity with tumor phenotype and to better understand how these alterations cause malignant growth. In order to gain insights into the in vivo transcriptional activity associated with a brain tumor, we carried out genome-wide microarray expression analyses of an adult brain tumor in Drosophila caused by homozygous mutation in the tumor suppressor gene brain tumor (brat). RESULTS: Two independent genome-wide gene expression studies using two different oligonucleotide microarray platforms were used to compare the transcriptome of adult wildtype flies with mutants displaying the adult brat(k06028) mutant brain tumor. Cross-validation and stringent statistical criteria identified a core transcriptional signature of brat(k06028) neoplastic tissue. We find significant expression level changes for 321 annotated genes associated with the adult neoplastic brat(k06028) tissue, indicating elevated and aberrant metabolic and cell cycle activity, upregulation of the basal transcriptional machinery, and elevated and aberrant activity of ribosome synthesis and translation control. One fifth of these genes show homology to known mammalian genes involved in cancer formation. CONCLUSION: Our results identify for the first time the genome-wide transcriptional alterations associated with an adult brain tumor in Drosophila and reveal insights into the possible mechanisms of tumor formation caused by homozygous mutation of the translational repressor brat.
Mixed-precision deep learning based on computational memory
Deep neural networks (DNNs) have revolutionized the field of artificial
intelligence and have achieved unprecedented success in cognitive tasks such as
image and speech recognition. Training of large DNNs, however, is
computationally intensive and this has motivated the search for novel computing
architectures targeting this application. A computational memory unit with
nanoscale resistive memory devices organized in crossbar arrays could store the
synaptic weights in their conductance states and perform the expensive weighted
summations in place in a non-von Neumann manner. However, updating the
conductance states in a reliable manner during the weight update process is a
fundamental challenge that limits the training accuracy of such an
implementation. Here, we propose a mixed-precision architecture that combines a
computational memory unit performing the weighted summations and imprecise
conductance updates with a digital processing unit that accumulates the weight
updates in high precision. A combined hardware/software training experiment of
a multilayer perceptron based on the proposed architecture using a phase-change
memory (PCM) array achieves 97.73% test accuracy on the task of classifying
handwritten digits (based on the MNIST dataset), within 0.6% of the software
baseline. The architecture is further evaluated using accurate behavioral
models of PCM on a wide class of networks, namely convolutional neural
networks, long short-term memory (LSTM) networks, and generative adversarial networks.
Accuracies comparable to those of floating-point implementations are achieved
without being constrained by the non-idealities associated with the PCM
devices. A system-level study demonstrates 173x improvement in energy
efficiency of the architecture when used for training a multilayer perceptron
compared with a dedicated fully digital 32-bit implementation
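The core of the mixed-precision scheme described above can be illustrated with a small sketch. This is a hypothetical simulation, not the authors' implementation: weight updates are accumulated in a high-precision digital variable `chi`, and only when the accumulated update crosses the device's programming granularity `eps` is a (noisy, imprecise) conductance update applied to the analog weight. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_precision_update(w_device, chi, grad, lr=0.1, eps=0.05, noise=0.2):
    """One weight-update step of a mixed-precision scheme (illustrative sketch).

    w_device : analog weights (modeling conductance states, updated imprecisely)
    chi      : high-precision digital accumulator, one entry per weight
    grad     : gradient of the loss w.r.t. the weights
    eps      : programming granularity of the memory device (assumed value)
    noise    : relative stochasticity of each conductance update (assumed value)
    """
    chi = chi + lr * grad                    # accumulate updates in high precision
    n_pulses = np.trunc(chi / eps)           # whole granularity steps crossed
    # Imprecise conductance update: each applied pulse lands with device noise.
    applied = n_pulses * eps * (1 + noise * rng.standard_normal(w_device.shape))
    w_device = w_device + applied
    chi = chi - n_pulses * eps               # keep the sub-eps residue digitally
    return w_device, chi
```

Because only the residue below `eps` stays in the accumulator, small gradient contributions are never lost to device granularity; they build up digitally until they are large enough to program reliably.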
In-memory Realization of In-situ Few-shot Continual Learning with a Dynamically Evolving Explicit Memory
Continually learning new classes from a few training examples without
forgetting previously learned classes demands a flexible architecture with
inevitably growing storage, in which new examples and classes can be
incrementally stored and efficiently retrieved. One viable architectural
solution is to tightly couple a stationary deep neural network to a dynamically
evolving explicit memory (EM). As the centerpiece of this architecture, we
propose an EM unit that leverages energy-efficient in-memory compute (IMC)
cores during the course of continual learning operations. We demonstrate for
the first time how the EM unit can physically superpose multiple training
examples, expand to accommodate unseen classes, and perform similarity search
during inference, using operations on an IMC core based on phase-change memory
(PCM). Specifically, the physical superposition of a few encoded training
examples is realized via in-situ progressive crystallization of PCM devices.
The classification accuracy achieved on the IMC core remains within a range of
1.28%--2.5% compared to that of the state-of-the-art full-precision baseline
software model on both the CIFAR-100 and miniImageNet datasets when continually
learning 40 novel classes (from only five examples per class) on top of 60 old
classes.
Comment: Accepted at the European Solid-state Devices and Circuits Conference (ESSDERC), September 202
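The two key operations of the explicit memory described above can be sketched in software. This is a hypothetical model, not the paper's implementation: superposing a training example is approximated as an in-place additive accumulation with device noise (standing in for progressive PCM crystallization), and similarity search at inference is a normalized matrix-vector multiply (the operation a crossbar performs in place). The class name, noise model, and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

class ExplicitMemory:
    """Illustrative sketch of a dynamically evolving explicit memory (EM).

    Class prototypes are modeled as accumulated vectors; adding an example
    mimics an imprecise analog write, and classification mimics the
    in-memory matrix-vector multiply used for similarity search.
    """

    def __init__(self, dim, noise=0.01):
        self.dim = dim
        self.noise = noise
        self.prototypes = {}  # class label -> superposed prototype vector

    def superpose(self, label, encoded):
        # New classes simply grow the memory; no old prototype is rewritten.
        if label not in self.prototypes:
            self.prototypes[label] = np.zeros(self.dim)
        # Noisy additive accumulation stands in for progressive crystallization.
        self.prototypes[label] += encoded + self.noise * rng.standard_normal(self.dim)

    def classify(self, query):
        labels = list(self.prototypes)
        mat = np.stack([self.prototypes[l] for l in labels])
        mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
        sims = mat @ (query / np.linalg.norm(query))  # crossbar-style MVM
        return labels[int(np.argmax(sims))]
```

Because each class prototype is a superposition of its few training examples, storage grows only with the number of classes, and inference cost is a single matrix-vector product regardless of how many examples have been absorbed.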